Sample Efficient Reinforcement Learning with REINFORCE

Authors

Abstract

Policy gradient methods are among the most effective methods for large-scale reinforcement learning, and their empirical success has prompted several works that develop the foundation of their global convergence theory. However, prior works have either required exact gradients or state-action visitation measure based mini-batch stochastic gradients with a diverging batch size, which limit their applicability in practical scenarios. In this paper, we consider classical policy gradient methods that compute an approximate gradient with a single trajectory or a fixed-size mini-batch of trajectories under soft-max parametrization and log-barrier regularization, along with the widely-used REINFORCE gradient estimation procedure. By controlling the number of "bad" episodes and resorting to the classical doubling trick, we establish an anytime sub-linear high-probability regret bound as well as almost sure global convergence of the average regret with an asymptotically sub-linear rate. These provide the first set of global convergence and sample-efficiency results for the well-known REINFORCE algorithm and contribute to a better understanding of its performance in practice.
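The single-trajectory REINFORCE estimator the abstract refers to can be illustrated with a minimal sketch. This is not the paper's algorithm (the doubling trick, regret accounting, and MDP setting are omitted); it shows one REINFORCE update under soft-max parametrization with log-barrier regularization on a toy bandit, where episodes have length one. All names here (`reinforce_step`, `lr`, `lam`) are illustrative choices, not notation from the paper.

```python
import numpy as np

def softmax(logits):
    """Soft-max parametrization: pi(a) = exp(theta_a) / sum_b exp(theta_b)."""
    z = logits - logits.max()          # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum()

def reinforce_step(theta, arm_rewards, rng, lr=0.1, lam=0.01):
    """One REINFORCE update from a single sampled episode (here, length 1).

    Ascends E[r] + lam * sum_a log pi(a); the log-barrier term keeps every
    action probability bounded away from zero, preserving exploration.
    """
    pi = softmax(theta)
    a = rng.choice(len(theta), p=pi)   # sample one action from the policy
    r = arm_rewards[a]                 # observe the reward for that action
    # gradient of log pi(a) under soft-max: one-hot(a) - pi
    grad_logp = -pi
    grad_logp[a] += 1.0
    # gradient of the barrier sum_a log pi(a): 1 - n * pi (per coordinate)
    grad_barrier = 1.0 - len(theta) * pi
    return theta + lr * (r * grad_logp + lam * grad_barrier)
```

Repeating this update on a two-armed bandit with rewards (1, 0) drives the policy toward the rewarding arm, while the barrier term prevents the soft-max from saturating into a fully deterministic policy.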


Related articles

Sample Efficient Reinforcement Learning with Gaussian Processes

This paper derives sample complexity results for using Gaussian Processes (GPs) in both model-based and model-free reinforcement learning (RL). We show that GPs are KWIK learnable, proving for the first time that a model-based RL approach using GPs, GP-Rmax, is sample efficient (PAC-MDP). However, we then show that previous approaches to model-free RL using GPs take an exponential number of step...


Efficient Bayes-Adaptive Reinforcement Learning using Sample-Based Search

Bayesian model-based reinforcement learning is a formally elegant approach to learning optimal behaviour under model uncertainty, trading off exploration and exploitation in an ideal way. Unfortunately, finding the resulting Bayes-optimal policies is notoriously taxing, since the search space becomes enormous. In this paper we introduce a tractable, sample-based method for approximate Bayesopti...


Sample-Efficient Reinforcement Learning through Transfer and Architectural Priors

Recent work in deep reinforcement learning has allowed algorithms to learn complex tasks such as Atari 2600 games just from the reward provided by the game, but these algorithms presently require millions of training steps in order to learn, making them approximately five orders of magnitude slower than humans. One reason for this is that humans build robust shared representations that are appl...


Sample-Efficient Evolutionary Function Approximation for Reinforcement Learning

Reinforcement learning problems are commonly tackled with temporal difference methods, which attempt to estimate the agent’s optimal value function. In most real-world problems, learning this value function requires a function approximator, which maps state-action pairs to values via a concise, parameterized function. In practice, the success of function approximators depends on the ability of ...


Sample-efficient Deep Reinforcement Learning for Dialog Control

Representing a dialog policy as a recurrent neural network (RNN) is attractive because it handles partial observability, infers a latent representation of state, and can be optimized with supervised learning (SL) or reinforcement learning (RL). For RL, a policy gradient approach is natural, but is sample inefficient. In this paper, we present 3 methods for reducing the number of dialogs require...



Journal

Journal title: Proceedings of the ... AAAI Conference on Artificial Intelligence

Year: 2021

ISSN: 2159-5399, 2374-3468

DOI: https://doi.org/10.1609/aaai.v35i12.17300